31 research outputs found

    Distributed stochastic optimization via matrix exponential learning

    Get PDF
    In this paper, we investigate a distributed learning scheme for a broad class of stochastic optimization problems and games that arise in signal processing and wireless communications. The proposed algorithm relies on the method of matrix exponential learning (MXL) and only requires locally computable gradient observations that are possibly imperfect and/or obsolete. To analyze it, we introduce the notion of a stable Nash equilibrium and we show that the algorithm is globally convergent to such equilibria, or locally convergent when an equilibrium is only locally stable. We also derive an explicit linear bound for the algorithm's convergence speed, which remains valid under measurement errors and uncertainty of arbitrarily high variance. To validate our theoretical analysis, we test the algorithm in realistic multi-carrier/multiple-antenna wireless scenarios where several users seek to maximize their energy efficiency. Our results show that learning allows users to attain a net increase between 100% and 500% in energy efficiency, even under very high uncertainty. Comment: 31 pages, 3 figures
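
    The abstract does not spell out the update rule; below is a minimal sketch of one MXL-style step under a unit-trace constraint on the matrix variable. The helper name, the toy objective, and the trace normalization are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.linalg import expm

def mxl_step(Y, grad_estimate, step_size):
    """One matrix exponential learning (MXL) step: the auxiliary matrix Y
    accumulates (possibly noisy or delayed) gradient observations, and the
    iterate X is recovered by exponentiation plus trace normalization,
    which keeps X positive definite with unit trace regardless of noise."""
    Y = Y + step_size * grad_estimate
    X = expm(Y)
    return Y, X / np.trace(X)

# Toy usage (assumed objective): noisy gradients of f(X) = tr(A X) on the
# unit-trace PSD cone; X concentrates on the top eigendirection of A.
rng = np.random.default_rng(0)
A = np.diag([3.0, 1.0, 0.5, 0.1])
Y = np.zeros((4, 4))
for t in range(1, 201):
    N = rng.standard_normal((4, 4))              # high-variance measurement noise
    Y, X = mxl_step(Y, A + 0.25 * (N + N.T), step_size=1.0 / t)
```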

    Contrastive Learning for Online Semi-Supervised General Continual Learning

    Full text link
    We study Online Continual Learning with missing labels and propose SemiCon, a new contrastive loss designed for partly labeled data. We demonstrate its efficiency by devising a memory-based method trained on an unlabeled data stream, where every data point added to memory is labeled using an oracle. Our approach outperforms existing semi-supervised methods when few labels are available, and obtains results similar to state-of-the-art supervised methods while using only 2.6% of labels on Split-CIFAR10 and 10% of labels on Split-CIFAR100. Comment: Accepted at ICIP'22
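
    The exact form of SemiCon is not given in this summary; a plausible sketch of a contrastive loss for partly labeled batches, combining a SimCLR-style term over augmented pairs with a supervised term over samples whose labels are known, might look as follows (the -1 convention for unlabeled samples and the alpha mixing weight are assumptions):

```python
import torch
import torch.nn.functional as F

def semi_contrastive_loss(z1, z2, labels, temperature=0.2, alpha=0.5):
    """z1, z2: two augmented views (N x d); labels: N, with -1 = unlabeled."""
    z = torch.cat([F.normalize(z1, dim=1), F.normalize(z2, dim=1)])  # 2N x d
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float('-inf'))            # never contrast with self
    n = z.shape[0]
    # unsupervised positives: the other augmented view of the same image
    loss_unsup = F.cross_entropy(sim, torch.arange(n).roll(n // 2))
    # supervised positives: any other sample carrying the same known label
    lbl = torch.cat([labels, labels])
    pos = (lbl[:, None] == lbl[None, :]) & (lbl[:, None] >= 0)
    pos.fill_diagonal_(False)
    log_p = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss_sup = -log_p[pos].sum() / pos.sum().clamp(min=1)
    return alpha * loss_sup + (1 - alpha) * loss_unsup
```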

    Domain-Aware Augmentations for Unsupervised Online General Continual Learning

    Full text link
    Continual Learning has been challenging, especially when dealing with unsupervised scenarios such as Unsupervised Online General Continual Learning (UOGCL), where the learning agent has no prior knowledge of class boundaries or task change information. While previous research has focused on reducing forgetting in supervised setups, recent studies have shown that self-supervised learners are more resilient to forgetting. This paper proposes a novel approach that enhances memory usage for contrastive learning in UOGCL by defining and using stream-dependent data augmentations together with some implementation tricks. Our proposed method is simple yet effective, achieves state-of-the-art results compared to other unsupervised approaches in all considered setups, and reduces the gap between supervised and unsupervised continual learning. Our domain-aware augmentation procedure can be adapted to other replay-based methods, making it a promising strategy for continual learning. Comment: Accepted to BMVC'23
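
    The paper's specific augmentations are not detailed in this abstract; one hypothetical "stream-dependent" policy in the spirit described would give replayed memory samples (seen many times) strong augmentations and fresh stream samples mild ones. All transform choices below are illustrative assumptions:

```python
import torchvision.transforms as T

# Strong pipeline for replayed memory samples, mild pipeline for new
# stream samples; the split itself is the stream-dependent choice.
strong = T.Compose([
    T.RandomResizedCrop(32, scale=(0.2, 1.0)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=0.2),
])
mild = T.Compose([T.RandomCrop(32, padding=4), T.RandomHorizontalFlip()])

def augment(batch, from_memory):
    """Apply the strong policy to memory samples, the mild one to stream samples."""
    return [(strong if m else mild)(x) for x, m in zip(batch, from_memory)]
```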

    Learning Representations on the Unit Sphere: Application to Online Continual Learning

    Full text link
    We use the maximum a posteriori estimation principle for learning representations distributed on the unit sphere. We derive loss functions for the von Mises-Fisher distribution and the angular Gaussian distribution, both designed for modeling symmetric directional data. A noteworthy feature of our approach is that the learned representations are pushed toward fixed directions, allowing for a learning strategy that is resilient to data drift. This makes it suitable for online continual learning, which is the problem of training neural networks on a continuous data stream, where multiple classification tasks are presented sequentially so that data from past tasks are no longer accessible, and data from the current task can be seen only once. To address this challenging scenario, we propose a memory-based representation learning technique equipped with our new loss functions. Our approach does not require negative data or knowledge of task boundaries and performs well with smaller batch sizes while being computationally efficient. We demonstrate with extensive experiments that the proposed method outperforms the current state-of-the-art methods on both standard evaluation scenarios and realistic scenarios with blurry task boundaries. For reproducibility, we use the same training pipeline for every compared method and share the code at https://t.ly/SQTj. Comment: 16 pages, 4 figures, under review
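
    For the von Mises-Fisher case, MAP estimation with equal class priors, a shared concentration kappa, and fixed class directions reduces to cross-entropy over scaled cosine similarities. A minimal sketch under that reading of the abstract (the one-hot choice of prototypes is an illustrative assumption, not necessarily the paper's):

```python
import torch
import torch.nn.functional as F

def vmf_loss(z, y, prototypes, kappa=10.0):
    """von Mises-Fisher negative log-likelihood with fixed class directions.

    With equal priors and shared concentration kappa, the class posterior is
    a softmax of kappa * <z, mu_c>, so the MAP criterion is cross-entropy on
    scaled cosine similarities; no negative data is needed."""
    z = F.normalize(z, dim=1)                    # embeddings on the sphere
    mu = F.normalize(prototypes, dim=1)          # fixed target directions
    return F.cross_entropy(kappa * z @ mu.t(), y)

# example: fixed, non-learned one-hot directions for 10 classes in 128-d
prototypes = torch.eye(10, 128)
```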

    MLBoost Revisited: A Faster Metric Learning Algorithm for Identity-Based Face Retrieval

    Full text link
    This paper addresses the question of metric learning, i.e. the learning of a dissimilarity function from a set of similar/dissimilar example pairs. This domain plays an important role in many machine learning applications such as those related to face recognition or face retrieval. More specifically, this paper builds on the recent MLBoost method proposed by Negrel et al. [25]. MLBoost has been shown to perform very well for face retrieval tasks, but the algorithm relies on the computation of a weak metric which is very time consuming. This paper demonstrates how, by introducing sparsity into the weak projectors, the convergence time can be reduced by up to a factor of 10× compared to MLBoost, without any performance loss. The paper also introduces an explicit way to control the rank of the resulting metrics, making it possible to fix the dimension of the (projected) feature space in advance. The proposed ideas are experimentally validated on a face retrieval task with three different signatures.
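
    The paper's sparsification mechanism is not specified in this summary; a simple sketch of one way to sparsify a weak projector, by hard-thresholding all but its largest-magnitude entries so that projections cost O(k) instead of O(d) (the keep_ratio parameter is an assumption):

```python
import numpy as np

def sparsify_projector(w, keep_ratio=0.1):
    """Keep only the k largest-magnitude entries of a dense weak projector,
    zeroing the rest; an approximation, not an exact sparse optimum."""
    k = max(1, int(keep_ratio * w.size))
    thresh = np.partition(np.abs(w), -k, axis=None)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)
```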

    New metrics for analyzing continual learners

    Full text link
    Deep neural networks have shown remarkable performance when trained on independent and identically distributed data from a fixed set of classes. However, in real-world scenarios, it can be desirable to train models on a continuous stream of data where multiple classification tasks are presented sequentially. This scenario, known as Continual Learning (CL), poses challenges to standard learning algorithms, which struggle to maintain knowledge of old tasks while learning new ones. This stability-plasticity dilemma remains central to CL, and multiple metrics have been proposed to measure stability and plasticity separately. However, none considers the increasing difficulty of the classification task, which inherently results in performance loss for any model. In that sense, we analyze some limitations of current metrics and identify the presence of setup-induced forgetting. We therefore propose new metrics that account for the task's increasing difficulty. Through experiments on benchmark datasets, we demonstrate that our proposed metrics can provide new insights into the stability-plasticity trade-off achieved by models in the continual learning environment. Comment: 6 pages, presented at MIRU 2023
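
    The abstract does not define the proposed metrics; a hypothetical difficulty-corrected forgetting measure along these lines would subtract the accuracy drop that an i.i.d.-trained reference model also suffers as classes accumulate. The matrix layout and the correction itself are assumptions, not the paper's definitions:

```python
import numpy as np

def corrected_forgetting(acc, baseline):
    """acc[t, i]: continual model's accuracy on task i after training step t.
    baseline[t, i]: same matrix for a reference model trained i.i.d. on all
    classes seen so far, capturing setup-induced difficulty growth.
    Correction: subtract the drop the baseline also suffers."""
    T = acc.shape[0]
    raw = np.array([acc[i, i] - acc[T - 1, i] for i in range(T - 1)])
    setup = np.array([baseline[i, i] - baseline[T - 1, i] for i in range(T - 1)])
    return (raw - setup).mean()
```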

    Evaluation of Second-order Visual Features for Land-Use Classification

    Get PDF
    This paper investigates the use of recent visual features based on second-order statistics, as well as new processing techniques to improve the quality of features. More specifically, we present and evaluate Fisher Vectors (FV), Vectors of Locally Aggregated Descriptors (VLAD), and Vectors of Locally Aggregated Tensors (VLAT). These techniques are combined with several normalization techniques, such as power-law normalization and orthogonalisation/whitening of descriptor spaces. Results on the UC Merced land-use dataset show the relevance of these new methods for land-use classification, as well as a significant improvement over Bag-of-Words.
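
    As an illustration of the aggregation-plus-normalization pipeline these features share, here is a compact VLAD with the power-law and L2 normalizations discussed above; this is the standard formulation, with alpha = 0.5 as the usual exponent:

```python
import numpy as np

def vlad(descriptors, centers, alpha=0.5):
    """Vector of Locally Aggregated Descriptors: sum residuals of local
    descriptors to their nearest codebook center, then apply power-law
    and global L2 normalization."""
    assign = np.argmin(
        ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    v = np.zeros_like(centers)
    for k in range(centers.shape[0]):
        if (assign == k).any():
            v[k] = (descriptors[assign == k] - centers[k]).sum(0)
    v = v.ravel()
    v = np.sign(v) * np.abs(v) ** alpha        # power-law normalization
    return v / (np.linalg.norm(v) + 1e-12)     # global L2 normalization
```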

    Optimal representations for searching heritage image databases

    No full text
    For several decades, the development of digitization and storage technologies has enabled many projects to digitize cultural heritage. The massive and continuous flow of digital data into cultural heritage databases raises many indexing problems: manual indexing of all the data is no longer feasible. To index data and make it easily accessible, methods for automatic and computer-assisted indexing have been developed over the years, but automatic indexing methods for non-textual documents (images, video, sound, 3D models, ...) remain complex to deploy on large volumes of data. In this thesis, we focus on the automatic indexing of images. Automatic or computer-assisted indexing requires a method for evaluating the similarity between two images. Our work is based on image-signature methods; these methods summarize the visual content of each image in a signature (a single vector) and then use these signatures to compute the similarity between two images. To extract the signatures, we use the following pipeline: first, we extract a large number of local descriptors from the image; then we summarize all these descriptors into a high-dimensional signature; finally, we strongly reduce the dimensionality of the resulting signature. State-of-the-art signatures based on this pipeline achieve very good performance in automatic indexing. However, they generally incur high storage and computational costs that make them impractical for large volumes of data. In this thesis, our goal is twofold: first, we want to improve image signatures to achieve very good performance on automatic indexing problems; second, we want to reduce the cost of the processing chain to enable scalability. We propose improvements to a state-of-the-art image signature named VLAT (Vectors of Locally Aggregated Tensors); these improvements make the signature more discriminative while reducing its dimensionality. To reduce the size of the signatures, we apply a linear projection of each signature into a lower-dimensional space, and we propose two methods for computing projectors that preserve the performance of the original signatures. The first method computes the projectors that best approximate the similarity scores between the original signatures. The second method is based on the quasi-copy retrieval problem: we compute projectors that satisfy a set of constraints on the rank of retrieved images with respect to the query image. The most expensive step of the extraction pipeline is the dimensionality reduction, because of the large size of the projectors. To reduce this cost, we propose to use sparse projectors by introducing a sparsity constraint into our projector-computation methods. Since it is generally hard to solve an optimization problem under a strict sparsity constraint, we propose for each problem a method that approximates the desired sparse projectors. All of this work is validated by experiments showing the practical value of the proposed methods in comparison with state-of-the-art methods.
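
    For the first projector-learning method, minimizing the Frobenius error between original and projected similarity scores has a closed form via the SVD of the signature matrix; a minimal sketch under that reading of the abstract (the notation and matrix layout are assumed):

```python
import numpy as np

def similarity_projector(X, d):
    """X: D x n matrix of signatures (one column per image). Returns the
    d x D projector P minimizing || X.T @ X - (P @ X).T @ (P @ X) ||_F,
    i.e. the best rank-d approximation of the similarity (Gram) matrix,
    given by the top-d left singular vectors of X."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :d].T

# usage: project 1024-d signatures down to 64-d, check similarity error
X = np.random.randn(1024, 200)
P = similarity_projector(X, d=64)
err = np.linalg.norm(X.T @ X - (P @ X).T @ (P @ X))
```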

    Behaviour of Li isotopes during regolith formation on granite (Massif Central, France): Controls on the dissolved load in water, saprolite, soil and sediment

    No full text
    Lithium (Li) contents and isotopes were studied in all environments of a small river catchment draining granite in the Margeride mountains of the French Massif Central. This covered surface waters, primary and accessory minerals of the granites, the whole rock, and soil and sediment samples developed in the catchment, completed with regional data for mineral waters and rainwater. The integrated investigation aimed at evaluating the potential of Li isotopes as effective tracers of water/rock interaction processes within a granitic environment. The δ7Li values and Li concentrations were measured on sediment and soil samples, following standard acid-dissolution procedures and chemical purification of Li using the cation-exchange resin protocol in a clean lab. Lithium-isotope compositions were measured with a Neptune MC-ICP-MS and Li concentrations by ICP-MS. The samples represented different stages of granite weathering, including fresh granite, weathered rock, surface saprolite, and sediments in riverbanks and fields bordering the streams. The extent of Li mobility during granite weathering was first evaluated by determining the percentage change relative to Ti, which ranged from -31% to -66% in the collected samples. The weathered rock was depleted in Li by 47%, with negative δ7Li values ranging from -1.9 to -3.4‰. Soil to riverbank sediment samples were characterized by less negative δ7Li values, indicating that Li is enriched in soil, with fractionation of Li isotopes and changes in mineral abundance in the samples. To complement this first view, we (i) modelled the theoretical Li-isotope signature of water interacting with granite, using a weathering model based on dissolution; (ii) applied an atmospheric-input correction to surface waters; (iii) applied a Rayleigh equation to model the Li-isotope fractionation when compared with corrected surface water and mineral water; and (iv) compared Li isotopes with Sr-isotope data in a larger weathering framework.
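
    For step (iii), the abstract does not give the equation used; the standard Rayleigh fractionation expression for the residual dissolved load, with f the fraction of Li remaining in solution and alpha the solid/fluid fractionation factor, reads:

```latex
% Rayleigh fractionation of the residual fluid during Li uptake by
% secondary phases; f = fraction of Li left in solution, alpha < 1
% (the solid preferentially takes up the light isotope's complement,
% driving the fluid toward heavier delta values as f decreases).
\delta^{7}\mathrm{Li}_{\mathrm{fluid}}
  = \left(\delta^{7}\mathrm{Li}_{0} + 1000\right) f^{\,\alpha - 1} - 1000
```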